Compressing Grasping Experience into a Dictionary of Prototypical Grasp-predicting Parts
Authors
Abstract
We present a real-world robotic agent that is capable of transferring grasping strategies across objects that share similar parts. The agent transfers grasps across objects by identifying, from examples provided by a teacher, parts by which objects are often grasped in a similar fashion. It then uses these parts to identify grasping points on novel objects. Because most human environments make it infeasible to pre-program grasping behaviors for every object the robot might encounter, grasping novel objects is a key issue in human-friendly robotics. Recent approaches to grasping novel objects aim at devising a direct mapping from visual features to grasp parameters. A central question in such approaches is which visual features to use. Some authors have shown that grasps can be computed from local visual features [3]. However, local features suffer from poor geometric resolution, which makes it difficult to accurately compute the 6D pose of a gripper. By contrast, using object parts as features allows robots to compute grasps of high geometric accuracy [1], [2], [4].

We present a method that allows a robot to learn to formulate grasp plans from visual data obtained from a 3D sensor. Our method relies on the identification of prototypical parts by which objects are often grasped. To this end, we provide the robot with the means of identifying, from a set of grasp examples, the 3D shape of parts that are recurrently observed within the manipulator during the grasps. Our approach effectively compresses the training data, generating a dictionary of prototypical parts that is an order of magnitude smaller than the training dataset. As prototypical parts are extracted from grasp examples, each of them automatically inherits a grasping strategy that parametrizes (1) the position and orientation of the manipulator with respect to the part, and (2) the finger preshape, i.e., the configuration in which the fingers should be set prior to grasping.

When a novel object appears, the robot tries to fit the prototypical parts to a snapshot that partially captures the object. The grasp associated with the part that best fits the snapshot can then be executed to manipulate the object. A key aspect of our work is that the shape and the spatial extent (or size) of the prototypes generated by our method directly result from
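The abstract sketches a three-stage pipeline: compress teacher-provided grasp examples into a dictionary of part prototypes, fit the prototypes to a partial 3D snapshot of a novel object, and execute the grasp inherited by the best-fitting part. The sketch below is purely illustrative and is not the authors' implementation: the greedy shape clustering, the nearest-neighbor fit score, and all names (part_distance, build_dictionary, plan_grasp, merge_threshold) are our own assumptions.

# A minimal, illustrative sketch of the pipeline described above
# (assumed names and scores, not the paper's actual method).
# Parts are Nx3 point clouds; grasps are opaque records holding a
# gripper pose relative to the part plus a finger preshape.
import numpy as np

def part_distance(a, b):
    """Symmetric RMS nearest-neighbor distance between two centered
    point clouds; a crude stand-in for a proper registration score
    such as ICP, and one that ignores rotation entirely."""
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1))
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def build_dictionary(examples, merge_threshold=0.01):
    """Compress (part_points, grasp) training pairs into a smaller
    dictionary by greedily skipping parts whose shape is already
    represented by an existing prototype."""
    prototypes = []
    for points, grasp in examples:
        covered = any(part_distance(points, p["points"]) < merge_threshold
                      for p in prototypes)
        if not covered:
            # The prototype inherits the grasp demonstrated on it:
            # relative gripper pose plus finger preshape.
            prototypes.append({"points": points, "grasp": grasp})
    return prototypes

def plan_grasp(prototypes, snapshot):
    """Fit every prototype to a partial 3D snapshot of a novel object
    and return the grasp inherited by the best-fitting part."""
    best = min(prototypes, key=lambda p: part_distance(p["points"], snapshot))
    return best["grasp"]

Note that this sketch keeps the first exemplar of each shape cluster rather than a learned average, and its fit score ignores orientation; the method described above would additionally need a pose-aware registration step to recover the 6D gripper pose in which the inherited grasping strategy is expressed.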
Similar resources
How can I, robot, pick up that object with my hand?
This paper describes a practical approach to the robot grasping problem, an approach composed of two parts. First, a vision-based grasp synthesis system, implemented on a humanoid robot, that is able to compute a set of feasible grasps and to execute any of them; this grasping system takes gripper kinematic constraints into account and requires little computational effort. Second, a learni...
Using Experience for Assessing Grasp Reliability
Autonomous manipulation is a key issue for a humanoid robot. Here, we are interested in a vision-based grasping behavior so that the robot can deal with previously unknown objects in real time and in an intelligent manner. Starting from a number of feasible candidate grasps, we focus on the problem of predicting their reliability using the knowledge acquired in previous grasping experiences. A ...
متن کاملSemantic and Geometric Scene Understanding for Single-view Task-oriented Grasping of Novel Objects
We present a task-oriented grasp model that learns grasps which are configurationally compatible with a given task. The model consists of a geometric grasp model and a semantic grasp model. The geometric model relies on a dictionary of grasp prototypes learned from experience, while the semantic model is CNN-based and identifies scene regions that are compatible with a specific task. ...
Tactile Experience-based Robotic Grasping
We propose an experience-based approach to the problem of blind grasping, i.e., stable robotic grasping using tactile sensing and hand kinematic feedback. We first collect a set of stable grasps to build a tactile experience database that contains the tactile contacts of each stable grasp. Using the tactile experience database, we propose an algorithm to synthesize local hand adjustment that controls t...
Publication date: 2012